46 research outputs found

    Real-Time Detection and Tracking using Wireless Sensor Networks (Information Sheet)

    To develop and deploy a detection and tracking system based on wireless sensor networks. Real-time detection and tracking is achieved using wireless sensor network hardware. The system is envisioned to effectively handle multiple, arbitrarily moving targets.

    Keep Your Eye on the Best: Contrastive Regression Transformer for Skill Assessment in Robotic Surgery

    This letter proposes a novel video-based contrastive regression architecture, Contra-Sformer, for automated surgical skill assessment in robot-assisted surgery. The proposed framework is structured to capture the differences in surgical performance between a test video and a reference video that represents optimal surgical execution. A feature extractor combining a spatial component (ResNet-18), supervised at frame level with gesture labels, and a temporal component (TCN) generates spatio-temporal feature matrices of the test and reference videos. These are then fed into an action-aware Transformer with multi-head attention that produces inter-video contrastive features at frame level, representative of the skill similarity/deviation between the two videos. Moments of sub-optimal performance can be identified and temporally localized in the obtained feature vectors, which are ultimately used to regress the manually assigned skill scores. Validated on the JIGSAWS dataset, Contra-Sformer achieves competitive performance (Spearman correlation 0.65-0.89), with a normalized mean absolute error between 5.8% and 13.4% on all tasks and across validation setups. Source code and models are available at https://github.com/anastadimi/Contra-Sformer.git
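    As a rough illustration of the contrastive-regression idea described above, the following PyTorch sketch pairs a ResNet-18 frame encoder with a 1-D temporal convolution (standing in for the TCN) and a Transformer encoder over per-frame feature differences. All module names, dimensions, and the pooling/regression head are assumptions made for illustration, not the released Contra-Sformer code (see the linked repository for that).

```python
# Hypothetical simplification of a contrastive regression model for skill
# assessment; module names and sizes are illustrative, not the released code.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FeatureExtractor(nn.Module):
    """Per-frame ResNet-18 features followed by a 1-D temporal convolution (TCN stand-in)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        backbone = resnet18(weights=None)
        self.spatial = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.temporal = nn.Conv1d(512, feat_dim, kernel_size=3, padding=1)

    def forward(self, frames):                              # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.spatial(frames.flatten(0, 1)).flatten(1)   # (B*T, 512)
        x = x.view(b, t, -1).transpose(1, 2)                # (B, 512, T)
        return self.temporal(x).transpose(1, 2)             # (B, T, feat_dim)

class ContrastiveRegressor(nn.Module):
    """Transformer over per-frame differences between test and reference features."""
    def __init__(self, feat_dim=256, heads=4):
        super().__init__()
        self.extractor = FeatureExtractor(feat_dim)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, 1)                  # regress the skill score

    def forward(self, test_video, reference_video):
        # Assumes the two videos are pre-aligned to the same number of frames T.
        contrast = self.extractor(test_video) - self.extractor(reference_video)
        return self.head(self.encoder(contrast).mean(dim=1))   # (B, 1) predicted score
```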

    MSDESIS: Multi-task stereo disparity estimation and surgical instrument segmentation

    Reconstructing the 3D geometry of the surgical site and detecting instruments within it are important tasks for surgical navigation systems and robotic surgery automation. Traditional approaches treat each problem in isolation and do not account for the intrinsic relationship between segmentation and stereo matching. In this paper, we present a learning-based framework that jointly estimates disparity and binary tool segmentation masks. The core component of our architecture is a shared feature encoder, which allows strong interaction between the two tasks. Experimentally, we train two variants of our network with different capacities and explore different training schemes, including both multi-task and single-task learning. Our results show that supervising the segmentation task improves our network's disparity estimation accuracy. We demonstrate a domain adaptation scheme in which we supervise the segmentation task with monocular data and achieve domain adaptation of the adjacent disparity task, reducing disparity end-point error and depth mean absolute error by 77.73% and 61.73%, respectively, compared to the pre-trained baseline model. Our best overall multi-task model, trained with both disparity and segmentation data in subsequent phases, achieves 89.15% mean Intersection-over-Union on the RIS test set and 3.18 mm depth mean absolute error on the SCARED test set. The proposed multi-task architecture runs in real time, processing 1280x1024 stereo input and simultaneously estimating disparity maps and segmentation masks at 22 frames per second. The model code and pre-trained models are made available at https://github.com/dimitrisPs/msdesis
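    The following PyTorch sketch illustrates the shared-encoder, multi-task layout described above; the layer sizes, the concatenation-based disparity head, and the segmentation head are simplified assumptions, not the released MSDESIS architecture.

```python
# Illustrative multi-task layout with a shared encoder (a simplified sketch,
# not the released MSDESIS implementation; see the linked repository for that).
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small convolutional encoder shared by both tasks (outputs quarter-resolution features)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)                                   # (B, 64, H/4, W/4)

class MultiTaskNet(nn.Module):
    """Disparity from concatenated left/right features, segmentation from the left view only."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = SharedEncoder()
        self.disparity = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1),
                                       nn.ReLU(inplace=True), nn.Conv2d(ch, 1, 1))
        self.segmentation = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                          nn.ReLU(inplace=True), nn.Conv2d(ch, 1, 1))

    def forward(self, left, right):
        fl, fr = self.encoder(left), self.encoder(right)     # shared weights for both views
        disp = self.disparity(torch.cat([fl, fr], dim=1))    # coarse disparity map
        mask = self.segmentation(fl)                         # binary tool logits
        return disp, mask
```

    In a multi-task training scheme, the disparity and segmentation losses would be combined (for example, summed with task weights) before back-propagating through the shared encoder, which is what lets supervision on one task influence the other.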

    On the detection of myocardial scar based on ECG/VCG analysis

    In this paper, we address the problem of detecting the presence of myocardial scar from standard ECG/VCG recordings, with the aim of developing a screening system for early detection of scar at the point of care. Based on the pathophysiological implications of scarred myocardium, which results in disordered electrical conduction, we implemented four distinct ECG signal processing methodologies to obtain a set of features that can capture the presence of myocardial scar. Two of these methodologies are novel approaches for detecting scar presence: (a) the use of a template ECG heartbeat, from records without scar, coupled with wavelet coherence analysis, and (b) the utilization of the VCG. The pool of extracted features is then used to formulate an SVM classification model through supervised learning. Feature selection is also employed to remove redundant features and maximize the classifier's performance. Classification experiments using 260 records from three different databases reveal that the proposed system achieves 89.22% accuracy under 10-fold cross-validation and an 82.07% success rate when tested on databases with different inherent characteristics, with similar levels of sensitivity (76%) and specificity (87.5%).
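    A minimal scikit-learn sketch of the classification stage described above (feature selection followed by an SVM, evaluated with 10-fold cross-validation); the file names, the number of selected features, and the kernel settings are placeholders, not the paper's configuration.

```python
# A hedged sketch of the classification stage: feature selection + SVM,
# evaluated with stratified 10-fold cross-validation. Data files are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("ecg_vcg_features.npy")   # placeholder: one feature row per record
y = np.load("scar_labels.npy")        # placeholder: 1 = scar present, 0 = absent

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),   # drop redundant features
    ("svm", SVC(kernel="rbf", C=1.0)),
])

scores = cross_val_score(model, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                         scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```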

    An Investigation into the Accuracy of Calculating upper Body Joint Angles Using MARG Sensors

    We investigate magnetic, angular rate, and gravity (MARG) sensor modules for deriving shoulder, elbow, and lumbar joint angles of the human body. We use three tri-axial MARG sensors, placed proximal to the wrist and elbow and centrally on the chest, and employ a quaternion-based unscented Kalman filter to estimate orientations from the sensor data, from which joint angles are calculated using a simple model of the arm. Tests reveal that the method has the potential to derive specific angles accurately: when compared with a camera-based system, root mean square differences between 5° and 15° were observed.
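    A minimal sketch, assuming SciPy, of how a joint angle can be derived from two estimated segment orientations expressed as quaternions; the upstream quaternion-based unscented Kalman filter and the arm model are taken as given, and the function below simply reports the magnitude of the relative rotation between adjacent segments.

```python
# Minimal sketch: joint angle as the magnitude of the relative rotation between
# two segment orientations (e.g. upper arm and forearm MARG modules).
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angle_deg(q_proximal, q_distal):
    """Angle (degrees) of the relative rotation between two orientations given as (x, y, z, w)."""
    r_rel = R.from_quat(q_proximal).inv() * R.from_quat(q_distal)
    return np.degrees(np.linalg.norm(r_rel.as_rotvec()))   # total rotation angle

# Example: distal segment rotated 90 degrees about the proximal segment's z-axis.
q_upper = R.identity().as_quat()
q_fore = R.from_euler("z", 90, degrees=True).as_quat()
print(joint_angle_deg(q_upper, q_fore))   # ~90.0
```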

    Surgical knot training in ophthalmic surgery: Skill assessment with eye-tracking

    Suturing is a fundamental task in ophthalmic surgery. Focused training is necessary to master the technical (tissue handling, knot tying) and cognitive (appropriate selection of instruments, forward planning) skills and to develop the high level of hand-eye coordination required for ophthalmic microsurgical procedures. Formulating novel objective measures of operational performance will be beneficial for training in ophthalmic microsurgery. Capturing eye movements and points of focus while performing surgical tasks can provide meaningful information for assessing the operator's technical and cognitive skills and overall performance. The locations of gaze focus and the spatial distribution of fixations embed valuable information for assessing the use of instruments, the sequence and quality of executed subtasks, and possibly the level of hand-eye coordination the operator demonstrates. This study explores eye-tracking for developing performance metrics for suturing tasks in ophthalmic surgery; the preliminary analysis focuses on the total duration of executing a surgical suture and its subtasks. It also introduces the spatial distribution of fixations as a feature to characterize the level of surgical expertise. Eye-tracking has been used as a tool for skill analysis in a variety of surgical applications. Copogna et al. [1] compared anesthesiologists of different expertise levels performing an epidural block; attentional heat maps and gaze plots showed different gaze dispersion between the groups. Causer et al. [2] showed that quiet-eye training significantly improved learning of surgical knot tying compared to a traditional technical approach. In [3], expert and novice neurosurgeons performing under a surgical microscope were examined, concluding that experts spend more time fixating on the region of interest before performing an action. Lee et al. [4] used eye-tracking data to identify gaze patterns and blind spots during real-time EGD. While efforts have been made to analyze gaze patterns [3], [4], a metric that can statistically compare the spatial distributions of gaze focus points, which would be a useful tool for evaluating the skill level of groups with different expertise, has yet to be developed.
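    One way such a comparison could be made, sketched here purely as an illustration and not as a method from this study, is to summarize each participant's fixations by a bivariate contour ellipse area and compare the resulting dispersion values between expertise groups with a rank test.

```python
# Hedged illustration (not from the paper): per-participant fixation dispersion
# as the 95% bivariate contour ellipse area, compared between groups with a rank test.
import numpy as np
from scipy.stats import chi2, mannwhitneyu

def fixation_ellipse_area(points, coverage=0.95):
    """Area of the covariance ellipse containing ~`coverage` of fixation points (x, y)."""
    cov = np.cov(np.asarray(points).T)
    scale = chi2.ppf(coverage, df=2)
    return np.pi * scale * np.sqrt(np.linalg.det(cov))

# Placeholder data: one (N, 2) array of fixation coordinates per participant in each group.
experts = [np.random.randn(80, 2) * 15 for _ in range(10)]
novices = [np.random.randn(80, 2) * 40 for _ in range(10)]

stat, p = mannwhitneyu([fixation_ellipse_area(pts) for pts in experts],
                       [fixation_ellipse_area(pts) for pts in novices],
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```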

    Surgical knot training in ophthalmic surgery: Skill assessment with eye-tracking

    Suturing is a fundamental task in ophthalmic surgery. Training is necessary to master technical as well as cognitive skills and to develop a high level of hand-eye coordination. Formulating novel objective measures of operational performance will be beneficial for training and computer-based image guidance. Capturing eye movements while performing surgical tasks can provide meaningful information to assess the operator's skill and overall performance. This study explores eye-tracking for developing performance metrics for suturing tasks, focusing on time-based metrics and the spatial distribution of fixations.

    Post-Operative Medium- and Long-Term Endocrine Outcomes in Patients with Non-Functioning Pituitary Adenomas—Machine Learning Analysis

    Post-operative endocrine outcomes in patients with non-functioning pituitary adenoma (NFPA) are variable. The aim of this study was to use machine learning (ML) models to better predict medium- and long-term post-operative hypopituitarism in patients with NFPAs. We included data from 383 patients who underwent surgery, with or without radiotherapy, for NFPAs, with a follow-up period between 6 months and 15 years. ML models, including k-nearest neighbour (KNN), support vector machine (SVM), and decision tree models, showed a superior ability to predict panhypopituitarism compared with non-parametric statistical modelling (mean accuracy: 0.89; mean AUC-ROC: 0.79), with SVM achieving the highest performance (mean accuracy: 0.94; mean AUC-ROC: 0.88). Pre-operative endocrine function was the strongest feature for predicting panhypopituitarism within 1 year post-operatively, while endocrine outcomes at 1 year post-operatively supported strong predictions of panhypopituitarism at 5 and 10 years post-operatively. Other features found to contribute to panhypopituitarism prediction were age, tumour volume, and the use of radiotherapy. In conclusion, our study demonstrates that ML models show potential in predicting post-operative panhypopituitarism in the medium and long term in patients with NFPAs. Future work will include incorporating additional, more granular data, including imaging and operative video data, across multiple centres.
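    A scikit-learn sketch of this kind of model comparison; the input files, feature set, cross-validation settings, and hyperparameters are placeholders rather than the study's actual protocol, and only the choice of classifiers and the accuracy / AUC-ROC reporting mirror the text.

```python
# Hedged sketch: compare KNN, SVM, and a decision tree with cross-validated
# accuracy and AUC-ROC. Data files and settings are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.load("nfpa_features.npy")        # placeholder: e.g. pre-op endocrine function, age, tumour volume, radiotherapy
y = np.load("panhypopituitarism.npy")   # placeholder: 1 = panhypopituitarism at follow-up

models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "DecisionTree": DecisionTreeClassifier(max_depth=4),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    res = cross_validate(model, X, y, cv=cv, scoring=("accuracy", "roc_auc"))
    print(f"{name}: accuracy={res['test_accuracy'].mean():.2f}, "
          f"AUC-ROC={res['test_roc_auc'].mean():.2f}")
```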

    Shifted-windows transformers for the detection of cerebral aneurysms in microsurgery

    Purpose: Microsurgical aneurysm clipping surgery (MACS) carries a high risk of intraoperative aneurysm rupture. Automated recognition of instances when the aneurysm is exposed in the surgical video would be a valuable reference point for neuronavigation, indicating phase transitions and, more importantly, designating moments of high risk for rupture. This article introduces the MACS dataset, containing 16 surgical videos with frame-level expert annotations, and proposes a learning methodology for surgical scene understanding that identifies video frames with the aneurysm present in the operating microscope's field of view.
    Methods: Despite the dataset imbalance (80% aneurysm absence, 20% presence), and although the models were developed without explicit annotations, we demonstrate the applicability of Transformer-based deep learning architectures (MACSSwin-T, vidMACSSwin-T) to detect the aneurysm and classify MACS frames accordingly. We evaluate the proposed models in multiple-fold cross-validation experiments with independent sets and on an unseen set of 15 images against 10 human experts (neurosurgeons).
    Results: Average (across folds) accuracies of 80.8% (range 78.5–82.4%) and 87.1% (range 85.1–91.3%) are obtained for the image- and video-level approaches, respectively, demonstrating that the models effectively learn the classification task. Qualitative evaluation of the models' class activation maps shows them to be localized on the aneurysm's actual location. Depending on the decision threshold, MACSSwin-T achieves 66.7–86.7% accuracy on the unseen images, compared to 82% for the human raters, with moderate to strong correlation.
    Conclusions: The proposed architectures show robust performance and, with a threshold adjusted to promote detection of the underrepresented (aneurysm presence) class, accuracy comparable to that of human experts. Our work represents a first step towards landmark detection in MACS, with the aim of informing surgical teams of high-risk moments so that precautionary measures can be taken to avoid rupture.
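    For illustration, a Swin-T backbone could be adapted to this binary frame-classification task roughly as follows; this is a hedged sketch using torchvision's swin_t, not the authors' MACSSwin-T release, and the positive-class weight and decision threshold reflect the 80/20 imbalance and threshold adjustment discussed above, with values chosen purely as assumptions.

```python
# Illustrative sketch (not the authors' release): Swin-T adapted for binary
# aneurysm-presence classification, with an up-weighted positive class and an
# adjustable decision threshold for the underrepresented "aneurysm present" class.
import torch
import torch.nn as nn
from torchvision.models import swin_t

model = swin_t(weights=None)
model.head = nn.Linear(model.head.in_features, 1)   # single logit: aneurysm present

# Roughly 20% positive frames -> up-weight the positive class in the loss (assumed value).
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([4.0]))

def predict(frames, threshold=0.5):
    """frames: (B, 3, 224, 224). Lowering the threshold favours recall of the rare class."""
    with torch.no_grad():
        probs = torch.sigmoid(model(frames).squeeze(1))
    return (probs >= threshold).long()

# Example training step (labels: 1 = aneurysm in the field of view).
frames = torch.randn(2, 3, 224, 224)
labels = torch.tensor([1.0, 0.0])
loss = criterion(model(frames).squeeze(1), labels)
loss.backward()
```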